Graphic layout designs play an essential role in visual communication. Yet handcrafting layout designs is skill-demanding, time-consuming, and non-scalable to batch production. Although generative models have emerged to make design automation no longer utopian, it remains non-trivial to customize designs that comply with designers' multimodal desires, i.e., constrained by background images and driven by foreground contents. In this study, we propose \textit{LayoutDETR}, which inherits the high quality and realism of generative modeling while reformulating content-aware requirements as a detection problem: we learn to detect, in a background image, the reasonable locations, scales, and spatial relations for multimodal elements in a layout. Experiments validate that our solution yields new state-of-the-art performance for layout generation on public benchmarks and on our newly curated ads banner dataset. For practical usage, we build our solution into a graphical system that facilitates user studies. We demonstrate that our designs attract significantly more subjective preference than baselines. Our code, models, dataset, graphical system, and demos are available at https://github.com/salesforce/LayoutDETR.
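As a rough illustration of the detection-style formulation, the sketch below decodes one bounding box per foreground element from background image features with a Transformer decoder. All names, shapes, and the box head are assumptions for illustration, not the authors' code (which lives in the linked repository).

```python
# Minimal sketch of a DETR-style layout generator (hypothetical, simplified).
# Background features serve as decoder memory; one query per foreground element.
import torch
import torch.nn as nn

class LayoutDecoderSketch(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=6):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        # Each query predicts a normalized box: (center_x, center_y, width, height).
        self.box_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                                      nn.Linear(d_model, 4), nn.Sigmoid())

    def forward(self, element_embeds, background_feats):
        # element_embeds:   (B, num_elements, d_model) multimodal element encodings
        # background_feats: (B, H*W, d_model) flattened background image features
        decoded = self.decoder(element_embeds, background_feats)
        return self.box_head(decoded)  # (B, num_elements, 4) boxes in [0, 1]
```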
The understanding capabilities of current state-of-the-art 3D models are limited by datasets with a small number of annotated samples and a pre-defined set of categories. In the 2D counterpart, recent advances have shown that similar problems can be significantly alleviated by employing knowledge from other modalities, such as language. Inspired by this, leveraging multimodal information for the 3D modality is promising for improving 3D understanding under the restricted-data regime, but this line of research is not well studied. Therefore, we introduce ULIP, which learns a unified representation of images, text, and 3D point clouds by pre-training with object triplets from the three modalities. To overcome the shortage of training triplets, ULIP leverages a pre-trained vision-language model that has already learned a common visual and textual space by training on massive image-text pairs. ULIP then learns a 3D representation space aligned with the common image-text space, using a small number of automatically synthesized triplets. ULIP is agnostic to the 3D backbone network and can easily be integrated into any 3D architecture. Experiments show that ULIP effectively improves the performance of multiple recent 3D backbones by simply pre-training them on ShapeNet55 using our framework, achieving state-of-the-art performance in both standard and zero-shot 3D classification on ModelNet40 and ScanObjectNN. ULIP also improves the performance of PointMLP by around 3% in 3D classification on ScanObjectNN, and outperforms PointCLIP by 28.8% in top-1 accuracy for zero-shot 3D classification on ModelNet40. Our code and pre-trained models will be released.
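A minimal sketch of the alignment idea, assuming a symmetric InfoNCE loss and a frozen image-text space from a CLIP-style model; all function names and the training-step structure are illustrative, not the released ULIP code.

```python
# Sketch of aligning 3D features to a frozen image-text embedding space
# with a contrastive loss over (point cloud, image, text) triplets.
import torch
import torch.nn.functional as F

def contrastive_loss(a, b, temperature=0.07):
    # Symmetric InfoNCE between two batches of paired embeddings.
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def ulip_step(pc_encoder, frozen_img_emb, frozen_txt_emb, points):
    # Only the 3D backbone is trained; image/text embeddings come from a
    # pre-trained (frozen) vision-language model such as CLIP.
    pc_emb = pc_encoder(points)
    return (contrastive_loss(pc_emb, frozen_img_emb) +
            contrastive_loss(pc_emb, frozen_txt_emb))
```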
Data heterogeneity across clients in federated learning (FL) settings is a widely acknowledged challenge. In response, personalized federated learning (PFL) emerged as a framework to curate local models for clients' tasks. In PFL, a common strategy is to develop local and global models jointly: the global model (for generalization) informs the local models, and the local models (for personalization) are aggregated to update the global model. A key observation is that improving the generalization ability of local models improves the generalization of the global model, which in turn builds better personalized models. In this work, we consider class imbalance, an overlooked type of data heterogeneity, in the classification setting. We propose FedNH, a novel method that improves local models' performance for both personalization and generalization by combining the uniformity and semantics of class prototypes. FedNH initially distributes class prototypes uniformly in the latent space and then smoothly infuses class semantics into them. We show that imposing uniformity helps to combat prototype collapse, while infusing class semantics improves local models. Extensive experiments were conducted on popular classification datasets under the cross-device setting. Our results demonstrate the effectiveness and stability of our method over recent works.
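A minimal sketch of the two ingredients, under the assumption that uniformity can be approximated by pushing nearest prototypes apart on the unit hypersphere and that semantics are infused by blending in local class means; the exact FedNH procedure and schedule are in the paper.

```python
# Sketch of uniform prototype initialization plus smooth semantic infusion
# (illustrative only; not the exact FedNH algorithm).
import torch
import torch.nn.functional as F

def init_uniform_prototypes(num_classes, dim, steps=1000, lr=0.1):
    # Spread prototypes on the unit hypersphere by minimizing the similarity
    # to each prototype's nearest neighbor (a stand-in for exact uniformity).
    protos = torch.randn(num_classes, dim, requires_grad=True)
    opt = torch.optim.SGD([protos], lr=lr)
    for _ in range(steps):
        p = F.normalize(protos, dim=-1)
        sim = p @ p.t() - torch.eye(num_classes)  # zero out self-similarity
        loss = sim.max(dim=1).values.mean()       # push nearest neighbors apart
        opt.zero_grad()
        loss.backward()
        opt.step()
    return F.normalize(protos.detach(), dim=-1)

def infuse_semantics(protos, class_means, rho=0.9):
    # Smoothly blend class semantics (local feature means) into prototypes.
    blended = rho * protos + (1 - rho) * F.normalize(class_means, dim=-1)
    return F.normalize(blended, dim=-1)
```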
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
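Since the checkpoints are public, one common way to try them is through the Hugging Face transformers library; a small released variant is shown so the example runs on a single machine.

```python
# Loading a publicly released BLOOM checkpoint via Hugging Face transformers.
# The 176B model needs multi-GPU serving; a smaller variant is shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```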
Video super-resolution is one of the most popular tasks on mobile devices, being widely used for automatic improvement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and invite the participants to design an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform, which has a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating frame rates of up to 500 FPS and a power consumption of 0.2 Watt per 30 FPS. A detailed description of all models developed in the challenge is provided in this paper.
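For flavor, here is a minimal TFLite-convertible 4X super-resolution sketch in the spirit of such mobile solutions: a few convolutions followed by depth-to-space upsampling. The architecture is an assumption for illustration, not any challenge entry.

```python
# Minimal mobile-friendly 4X super-resolution sketch (illustrative only).
import tensorflow as tf

def build_sr_model(scale=4, channels=16):
    inp = tf.keras.Input(shape=(None, None, 3))
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(3 * scale * scale, 3, padding="same")(x)
    # Depth-to-space (pixel shuffle) is a cheap, NPU-friendly upsampler.
    out = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return tf.keras.Model(inp, out)

model = build_sr_model()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # deployable with TensorFlow Lite delegates
```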
The role of mobile cameras has increased dramatically over the past few years, leading to more and more research in automatic image quality enhancement and RAW photo processing. In this Mobile AI challenge, the target was to develop an efficient end-to-end AI-based image signal processing (ISP) pipeline that replaces the standard mobile ISPs and can run on modern smartphone GPUs using TensorFlow Lite. The participants were provided with a large-scale Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format Fujifilm GFX100 camera. The runtime of the resulting models was evaluated on the Snapdragon 8 Gen 1 GPU, which provides excellent acceleration results for the majority of common deep learning ops. The proposed solutions are compatible with all recent mobile GPUs and can process Full HD photos in 20-50 milliseconds while achieving high fidelity results. A detailed description of all models developed in this challenge is provided in this paper.
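As a toy illustration of what such a learned pipeline replaces, the sketch below maps a packed 4-channel Bayer RAW input to full-resolution RGB; the architecture is assumed for illustration and is far simpler than the challenge solutions.

```python
# Toy learned-ISP sketch (illustrative): maps a packed 4-channel Bayer RAW
# input (half resolution) to a full-resolution RGB image.
import tensorflow as tf

def build_isp_model(channels=32):
    raw = tf.keras.Input(shape=(None, None, 4))  # packed RGGB Bayer planes
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(raw)
    x = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2D(3 * 4, 3, padding="same")(x)
    # depth_to_space restores full resolution from the packed Bayer grid.
    rgb = tf.keras.layers.Lambda(lambda t: tf.nn.depth_to_space(t, 2))(x)
    return tf.keras.Model(raw, rgb)
```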
Recently, cloud-based graph convolutional networks (GCNs) have shown great success and potential in many privacy-sensitive applications, such as personal healthcare and financial systems. Despite their high inference accuracy and performance on the cloud, preserving data privacy in GCN inference, which is essential for these practical applications, remains largely unexplored. In this paper, we make an initial attempt at this and develop $\textit{CryptoGCN}$, a homomorphic encryption (HE) based GCN inference framework. A key to the success of our approach is reducing the tremendous computational overhead of HE operations, which can be orders of magnitude higher than their counterparts in the plaintext space. To this end, we develop an approach that effectively exploits the sparsity of matrix operations in GCN inference, significantly reducing the computational overhead. Specifically, we propose a novel AMA data formatting method and an associated spatial convolution method, which exploit the complex graph structure to perform efficient matrix-matrix multiplications in HE computation, thereby greatly reducing the number of HE operations. We also develop a co-optimization framework that explores the trade-offs among accuracy, security level, and computational overhead through judicious pruning and polynomial approximation of the activation modules in GCNs. On the NTU-XView skeleton joint dataset, i.e., to the best of our knowledge the largest dataset evaluated homomorphically, our experimental results demonstrate that $\textit{CryptoGCN}$ outperforms state-of-the-art solutions in both latency and the number of homomorphic operations, achieving a 3.10$\times$ speedup in latency and reducing the total number of homomorphic operations by 77.4\%, with a small accuracy loss of 1-1.5\%.
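A plaintext NumPy illustration of the two levers, assuming a degree-2 polynomial stands in for ReLU and a SciPy sparse matrix stands in for the AMA-formatted adjacency; the actual HE kernels and data format are described in the paper.

```python
# Plaintext illustration of two ideas behind HE-friendly GCN inference:
# (1) exploit adjacency sparsity in aggregation, (2) replace ReLU with a
# low-degree polynomial so every op is HE-compatible. Real HE code omitted.
import numpy as np
from scipy.sparse import csr_matrix

def poly_relu(x, c=(0.47, 0.50, 0.09)):
    # Degree-2 polynomial approximation of ReLU on a bounded input range
    # (coefficients illustrative; CryptoGCN fits its own approximations).
    return c[0] + c[1] * x + c[2] * x * x

def gcn_layer(adj_sparse, h, w):
    # Sparse aggregation skips most multiplications -- the same structural
    # sparsity the AMA format exploits inside HE ciphertexts.
    return poly_relu(adj_sparse @ (h @ w))

adj = csr_matrix(np.eye(5, dtype=np.float32))   # toy graph: self-loops only
h = np.random.randn(5, 8).astype(np.float32)    # node features
w = np.random.randn(8, 4).astype(np.float32)    # layer weights
print(gcn_layer(adj, h, w).shape)               # (5, 4)
```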
Despite recent progress in integrating evolutionary computation into reinforcement learning, the lack of a high-performance platform endowed with composability and massive parallelism poses non-trivial difficulties for research and applications related to asynchronous commercial games. Here we introduce Lamarckian, an open-source platform that supports evolutionary reinforcement learning scalable to distributed computing resources. To improve training speed and data efficiency, Lamarckian adopts optimized communication methods and an asynchronous evolutionary reinforcement learning workflow. To meet the demand of commercial games and various methods for asynchronous interfaces, Lamarckian tailors an asynchronous Markov decision process interface and designs an object-oriented software architecture with decoupled modules. Compared with the state-of-the-art RLlib, we empirically demonstrate Lamarckian's unique advantages on benchmarks with up to 6000 CPU cores: (i) both the sampling efficiency and training speed are doubled when running PPO on the Google Football game; (ii) the training speed is 13 times faster when running PBT+PPO on the Pong game. In addition, we present two use cases: (i) how Lamarckian is applied to generating behavior-diverse game AI; (ii) how Lamarckian is applied to game-balancing tests for an asynchronous commercial game.
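A schematic of the population-based training (PBT) loop that such a platform parallelizes across asynchronous workers; every name here is illustrative and unrelated to Lamarckian's actual API.

```python
# Schematic population-based training (PBT) step over a population of RL
# workers -- an illustration of the workflow, not Lamarckian's API.
import random

def pbt_step(population, evaluate, exploit_frac=0.2):
    # population: list of dicts {"params": ..., "hyper": {...}};
    # evaluate: callable scoring one member (e.g., mean episode return).
    scored = sorted(population, key=evaluate, reverse=True)
    cutoff = max(1, int(len(scored) * exploit_frac))
    for loser in scored[-cutoff:]:
        winner = random.choice(scored[:cutoff])
        loser["params"] = winner["params"]  # exploit: copy weights (clone in practice)
        loser["hyper"] = {k: v * random.choice((0.8, 1.25))
                          for k, v in winner["hyper"].items()}  # explore: perturb
    return scored
```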
The rapid growth and deployment of deep learning (DL) has witnessed emerging privacy and security concerns. To mitigate these issues, secure multi-party computation (MPC) has been discussed as a way to enable privacy-preserving DL computation. In practice, MPC protocols often come at very high computation and communication overhead, which can potentially prohibit their popularity in large-scale systems. Two orthogonal research trends have attracted enormous interest in addressing the energy efficiency of secure deep learning: overhead reduction of MPC comparison protocols, and hardware acceleration. However, they either achieve a low reduction ratio and thus suffer from high latency due to limited computation and communication savings, or are power-hungry, as existing works mainly focus on general computing platforms such as CPUs and GPUs. In this work, as a first attempt, we develop PolyMPCNet, a systematic framework for joint overhead reduction of MPC comparison protocols and hardware acceleration, by integrating the hardware latency of cryptographic building blocks into the DNN loss function, to achieve high energy efficiency, accuracy, and security guarantees. Instead of heuristically checking model sensitivity after a DNN is well trained (by removing or dropping certain non-polynomial operators), our key design principle is to enforce exactly what is assumed in the DNN design: training a DNN that is both hardware-efficient and secure, while escaping local minima and saddle points and maintaining high accuracy. More specifically, we propose a straight-through polynomial activation initialization method for cryptographic-hardware-friendly trainable polynomial activation functions that replace the expensive 2P-ReLU operator. We also develop a cryptographic hardware scheduler and a corresponding performance model for field-programmable gate array (FPGA) platforms.
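A sketch of a trainable degree-2 polynomial activation of the kind that can replace ReLU to keep a network MPC/HE-friendly; the initialization and degree are assumptions, not the paper's exact straight-through scheme.

```python
# Sketch of a trainable polynomial activation replacing ReLU so the network
# stays MPC/HE-friendly (initialization and degree are illustrative).
import torch
import torch.nn as nn

class PolyAct(nn.Module):
    def __init__(self):
        super().__init__()
        # y = a + b*x + c*x^2, initialized near a smooth ReLU approximation
        # on [-1, 1]: ((x + 1) / 2)^2 = 0.25 + 0.5*x + 0.25*x^2.
        self.coef = nn.Parameter(torch.tensor([0.25, 0.5, 0.25]))

    def forward(self, x):
        a, b, c = self.coef
        return a + b * x + c * x * x

layer = nn.Sequential(nn.Linear(8, 8), PolyAct())
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 8])
```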
We present PATRON, a new method that uses prompt-based uncertainty estimation for data selection when fine-tuning pre-trained language models under cold-start scenarios, i.e., when no initial labeled data is available. In PATRON, we design (1) a prompt-based uncertainty propagation approach to estimate the importance of data points and (2) a partition-then-rewrite (PTR) strategy to promote sample diversity for annotation. Experiments on six text classification datasets show that PATRON outperforms the strongest cold-start data selection baselines by up to 6.9%. Moreover, with only 128 labels, PATRON achieves 91.0% and 92.1% of the fully supervised performance based on vanilla fine-tuning and prompt-based learning, respectively. Our implementation of PATRON is available at https://github.com/yueyu1030/patron.
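A schematic of the two steps, assuming precomputed text embeddings and prompted label distributions, with k-means as the partitioner and per-cluster max-entropy selection standing in for the full PTR strategy.

```python
# Schematic cold-start selection: score unlabeled texts by prompt-based
# predictive entropy, partition embeddings with k-means, then pick the most
# uncertain point per partition (a simplification of PATRON's strategy).
import numpy as np
from sklearn.cluster import KMeans

def entropy(probs):
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

def select(embeddings, label_probs, budget):
    # embeddings:  (N, d) text embeddings; label_probs: (N, C) prompted
    # label distributions from a pre-trained LM (both assumed precomputed).
    scores = entropy(label_probs)
    clusters = KMeans(n_clusters=budget, n_init=10).fit_predict(embeddings)
    picks = []
    for k in range(budget):
        idx = np.where(clusters == k)[0]
        picks.append(idx[np.argmax(scores[idx])])  # most uncertain per cluster
    return picks
```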